Web Survey Bibliography
Relevance & Research Question: Response latency measurement and eye tracking are two computer-assisted pretesting methods that may be particularly useful for evaluating Web questionnaires. In contrast to other techniques (e.g., expert reviews, qualitative interviews), both methods produce nonreactive and objective measures of behavior that are affected neither by the researcher (and the ways in which she tests the questions) nor by the research context. While previous studies have shown that longer response latencies and fixation times are indicative of problematic questions (Lenzner et al., 2010, 2011), little is known about the utility of the two methods (or measures) in practical pretesting contexts (e.g., in testing draft questions). This study examines whether response latencies and fixation times discriminate between flawed and improved versions of survey questions.
Methods & Data: In a laboratory experiment, respondents’ eye movements and response latencies were recorded while they answered one of two versions of a Web questionnaire. One group (n = 22) received a questionnaire containing poorly worded questions; the other group (n = 22) received the same questionnaire with improved question wordings. Because response latencies and fixation times vary considerably between individuals, we computed a baseline fixation rate (eye tracking) and a baseline reading rate (response latency) for every respondent from seven additional questions asked in the same Web survey. In the analyses, whenever the response or fixation time for a question exceeded the respondent’s baseline by more than 15%, the question was deemed problematic; the sketch below illustrates this rule. (The analyses were repeated with 10%, 20%, and 25% thresholds, and all conclusions remained unchanged.)
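The abstract does not spell out the exact computation, so the following Python sketch only illustrates the thresholding rule under stated assumptions: all function names, variable names, and timing values are hypothetical, and in practice the baseline rates would additionally account for question length (e.g., time per character).

    # Illustrative sketch of the 15%-above-baseline rule described above.
    # All names and numbers are assumptions for demonstration, not taken
    # from the study's materials.

    def baseline(calibration_times: list[float]) -> float:
        """Per-respondent baseline: mean time over the seven
        calibration questions asked in the same survey."""
        return sum(calibration_times) / len(calibration_times)

    def is_problematic(question_time: float, respondent_baseline: float,
                       threshold: float = 0.15) -> bool:
        """Deem a question problematic when the respondent's time on it
        exceeds their baseline by more than the threshold (15% here)."""
        return question_time > respondent_baseline * (1.0 + threshold)

    # Example: a respondent averaging 8.0 seconds on the calibration
    # questions has a cutoff of 8.0 * 1.15 = 9.2 seconds.
    b = baseline([7.5, 8.2, 8.3, 7.9, 8.1, 8.0, 8.0])
    print(is_problematic(9.5, b))  # True  -> flagged as problematic
    print(is_problematic(8.5, b))  # False -> within the respondent's norm

Rerunning the same classification with threshold values of 0.10, 0.20, and 0.25 corresponds to the robustness checks mentioned above.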
Results: The fixation rate (eye tracking) was consistently more accurate than the reading rate (response latency) in classifying questions as flawed or improved. Overall accuracy ranged from 60% to 85% for the fixation rate and from 43% to 70% for the reading rate. The eye-tracking measure also produced considerably fewer misses (flawed questions that went undetected) and fewer false alarms (improved questions flagged as problematic); the second sketch below illustrates how these measures can be computed.
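For concreteness, here is a minimal sketch of how accuracy, misses, and false alarms can be computed from such flags; the abstract does not report how these measures were operationalized, so the function and example data below are purely illustrative assumptions.

    # Minimal sketch of the three evaluation measures mentioned above;
    # names and example data are illustrative, not the study's actual code.

    def evaluate(flagged: list[bool], is_flawed: list[bool]) -> dict[str, float]:
        """flagged[i]: the measure marked question i as problematic;
        is_flawed[i]: question i came from the poorly worded version."""
        hits = sum(f and w for f, w in zip(flagged, is_flawed))
        misses = sum((not f) and w for f, w in zip(flagged, is_flawed))
        false_alarms = sum(f and (not w) for f, w in zip(flagged, is_flawed))
        correct_rejections = sum((not f) and (not w) for f, w in zip(flagged, is_flawed))
        return {
            "accuracy": (hits + correct_rejections) / len(flagged),
            # miss rate: share of flawed questions that were not flagged
            "miss_rate": misses / max(1, sum(is_flawed)),
            # false alarm rate: share of improved questions that were flagged
            "false_alarm_rate": false_alarms / max(1, len(flagged) - sum(is_flawed)),
        }

    # Example with four flawed and four improved questions:
    print(evaluate([True, True, False, True, False, True, False, False],
                   [True, True, True,  True, False, False, False, False]))
    # {'accuracy': 0.75, 'miss_rate': 0.25, 'false_alarm_rate': 0.25}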
Added Value: This study suggests that fixation times and response latencies are potentially useful methods for pretesting (draft) Web questionnaires, although the accuracy with which they identify problematic questions is not yet satisfactory.
Web survey bibliography (364)
- Displaying Videos in Web Surveys: Implications for Complete Viewing and Survey Responses; 2017; Mendelson, J.; Lee Gibson, J.; Romano Bergstrom, J. C.
- Usability Testing for Survey Research; 2017; Geisen, E.; Romano Bergstrom, J. C.
- Where, When, How and with What Do Panel Interviews Take Place and Is the Quality of Answers Affected...; 2017; Niebruegge, S.
- Nonresponses as context-sensitive response behaviour of participants in online-surveys and their relevance...; 2017; Wetzlehuetter, D.
- Do distractions during web survey completion affect data quality? Findings from a laboratory experiment...; 2017; Wenz, A.
- Predicting Breakoffs in Web Surveys; 2017; Mittereder, F.; West, B. T.
- Comparing acquiescent and extreme response styles in face-to-face and web surveys; 2017; Liu, M.; Conrad, F. G.; Lee, S.
- Respondent mode choice in a smartphone survey; 2017; Conrad, F. G.; Schober, M. F.; Antoun, C.; Yan, H. Y.; Hupp, A.; Johnston, M.; Ehlen, P.; Vickers, L...
- Effects of Mobile versus PC Web on Survey Response Quality: a Crossover Experiment in a Probability...; 2017; Antoun, C.; Couper, M. P.; Conrad, F. G.
- Methods for Evaluating Respondent Attrition in Web-Based Surveys; 2016; Hochheimer, C. J.; Sabo, R. T.; Krist, A. H.; Day, T.; Cyrus, J.; Woolf, S. H.
- Mobile-only web survey respondents; 2016; Lugtig, P. J.; Toepoel, V.; Amin, A.
- Using official surveys to reduce bias of estimates from nonrandom samples collected by web surveys; 2016; Beresovsky, V.; Dorfman, A.; Rumcheva, P.
- Making use of Internet interactivity to propose a dynamic presentation of web questionnaires; 2016; Revilla, M.; Ochoa, C.; Turbina, A.
- Helping respondents provide good answers in Web surveys; 2016; Couper, M. P.; Zhang, C.
- Gamifying. Not all fun and games; 2016; Stubington, P.; Crichton, C.
- FocusVision 2015 Annual MR Technology Report; 2016; Macer, T.; Wilson, S.
- Are sliders too slick for surveys?; 2016; Buskirk, T. D.
- Research gamification for quality pharmaceutical stakeholder insights; 2016; Mondry, B.; Fink, L.
- SurveyTester from Knowledge Navigators; 2016; Macer, T.
- Simplifying your mobile solution; 2016; Berry, K.
- Effects of motivating question types with graphical support in multi channel design studies; 2016; Luetters, H.; Friedrich-Freksa, M.; Vitt, S.; Goldstein, D. G.
- Why Do Web Surveys Take Longer on Smartphones?; 2016; Couper, M. P.; Peterson, G. J.
- Usability Testing within Agile Process; 2016; Holland, T.
- Association of Eye Tracking with Other Usability Metrics ; 2016; Olmsted, E. L.
- Cognitive Probing Methods in Usability Testing – Pros and Cons; 2016; Nichols, E. M.
- Thinking Inside the Box: Visual Design of the Response Box Affects Creative Divergent Thinking in an...; 2016; Mohr, A. H.; Sell, A.; Lindsay, T.
- Distractions: The Incidence and Consequences of Interruptions for Survey Respondents; 2016; Ansolabehere, S.; Schaffner, B. F.
- The Effect of CATI Questions, Respondents, and Interviewers on Response Time; 2016; Olson, K.; Smyth, J. D.
- New Generation of Online Questionnaires?; 2016; Revilla, M.; Ochoa, C.; Turbina, A.
- The Analysis of Respondent’s Behavior toward Edit Messages in a Web Survey; 2016; Park, Y.
- Effects of Data Collection Mode and Response Entry Device on Survey Response Quality; 2016; Ha, L.; Zhang, Che.; Jiang, W.
- Navigation Buttons in Web-Based Surveys: Respondents’ Preferences Revisited in the Laboratory; 2016; Romano Bergstrom, J. C.; Erdman, C.; Lakhe, S.
- Online Surveys are Mixed-Device Surveys. Issues Associated with the Use of Different (Mobile) Devices...; 2016; Toepoel, V.; Lugtig, P. J.
- A Technical Guide to Effective and Accessible web Surveys; 2016; Baatard, G.
- The Validity of Surveys: Online and Offline; 2016; Wiersma, W.
- Computer-assisted and online data collection in general population surveys; 2016; Skarupova, K.
- A Framework of Incorporating Thai Social Networking Data in Online Marketing Survey; 2016; Jiamthapthaksin, R.; Aung, T. H.; Ratanasawadwat, N.
- Creation and Usability Testing of a Web-Based Pre-Scanning Radiology Patient Safety and History Questionnaire...; 2016; Robinson, T. J.; DuVall, S.; Wiggins III, R.
- Comprehension and engagement in survey interviews with virtual agents; 2016; Conrad, F. G.; Schober, M. F.; Jans, M.; Orlowski, R. A.; Nielsen, D.; Levenstein, R. M.
- Taming Big Data: Using App Technology to Study Organizational Behavior on Social Media; 2015; Bail, C. A.
- A Meta-Analysis of Breakoff Rates in Mobile Web Surveys; 2015; Mavletova, A. M.; Couper, M. P.
- Optimizing the Decennial Census for Mobile – A Case Study; 2015; Nichols, E. M.; Hawala, E. O.; Horwitz, R.; Bentley, M.
- Using Video to Reinvigorate the Open Question; 2015; Cape, P.
- Are Sliders Too Slick for Surveys? An Experiment Comparing Slider and Radio Button Scales for Smartphone...; 2015; Aadland, D.; Aalberg, T.
- Web Surveys Optimized for Smartphones: Are there Differences Between Computer and Smartphone Users?; 2015; Andreadis, I.
- Designing web surveys for the multi-device internet; 2015; de Bruijne, M.
- Data Quality Standards in Mixed Mode Surveys; 2015; Bremer, J.; Barbulescu, M.; Bennett, J.
- Changing from CAPI to CAWI in an ongoing household panel - experiences from the German Socio-Economic...; 2015; Schupp, J.; Sassenroth, D.
- Rating Scales in Web Surveys: A Test of New Drag-and-Drop Rating Procedures; 2015; Kunz, T.
- A Review of Issues in Gamified Surveys; 2015; Keusch, F.; Zhang, Che.